Optimization of maintenance strategy model for multi-component system based on performance-based contract
XU Feixue, LIU Qinming, OUYANG Hailing, YE Chunming
Journal of Computer Applications    2021, 41 (4): 1184-1191.   DOI: 10.11772/j.issn.1001-9081.2020071033
Aiming at the low maintenance efficiency of multi-component series systems and the low profit of suppliers, and considering the economic correlation between components, a maintenance strategy model for multi-component systems based on performance-based contracts was proposed. First, the Weibull distribution was used to describe the service-life law of each component in the system, and different maintenance strategies were applied by judging the relationship between each component's usage, the preventive maintenance threshold and the opportunistic maintenance threshold. Second, the probability of each maintenance activity and the corresponding number of maintenance actions within the unit renewal cycle were calculated, and a maintenance strategy model for multi-component systems based on the performance-based contract was established, with the preventive and opportunistic maintenance thresholds as decision variables and the maximization of supplier profit as the objective. Finally, the Grey Wolf Optimizer (GWO) algorithm was used to solve the proposed model. Numerical example analysis showed that, compared with the Genetic Algorithm (GA) and Particle Swarm Optimization (PSO), the improved GWO algorithm improved the solution precision by 22.6% and 7.6% respectively, and the profit margin of the proposed performance model reached 25.3% under a linear return function, 5.2% higher than that of the traditional cost model. The proposed model and algorithm can effectively address the low maintenance quality and efficiency of suppliers, and provide a basis for suppliers and operators to jointly formulate maintenance contracts.
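As a rough, hypothetical sketch of the threshold logic described above (the function names, thresholds and Weibull parameters are illustrative, not taken from the paper), the following Python fragment shows how a component's usage can be compared against the preventive and opportunistic thresholds, with the Weibull distribution giving its failure probability:

```python
import numpy as np

def maintenance_action(age, t_prev, t_opp, system_stopped):
    """Decide the maintenance action for one component (illustrative rule).

    age            -- current usage of the component
    t_prev         -- preventive-maintenance threshold (decision variable)
    t_opp          -- opportunistic-maintenance threshold, t_opp < t_prev
    system_stopped -- True if the series system is already down for another component
    """
    if age >= t_prev:
        return "preventive"            # component reached its own threshold
    if system_stopped and age >= t_opp:
        return "opportunistic"         # piggy-back on an existing stoppage
    return "none"

def weibull_failure_prob(t, shape, scale):
    """F(t) = 1 - exp(-(t/scale)^shape): probability of failure before usage t."""
    return 1.0 - np.exp(-(t / scale) ** shape)

# Example: shape=2.0, scale=100 gives roughly a 39% chance of failure by t=70
print(weibull_failure_prob(70, 2.0, 100.0))
print(maintenance_action(age=55, t_prev=80, t_opp=50, system_stopped=True))
```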
Cyclic iterative ontology construction method based on demand assessment and response
DAI Tingting, ZHOU Le, YU Qinyong, HUANG Xifeng, XIE Jun, SONG Minghui, LIU Qiao
Journal of Computer Applications    2020, 40 (9): 2712-2718.   DOI: 10.11772/j.issn.1001-9081.2020010039
Aiming at the problem that relatively mature ontology construction methods based on the IEEE 1074-1995 software development standard, such as the METHONTOLOGY method and the seven-step method, do not consider ontology quality assessment and its response, a new cyclic iterative ontology construction method based on demand assessment and response was proposed. First, based on the software development V-model and an ontology testing framework, demand analysis for the constructed ontology was conducted to define a set of ontology test design documents that emphasize meeting the demands rather than knowledge richness. Second, the core architecture and the architecture knowledge system were refined, and the test documents were updated. Finally, the expression of knowledge satisfiability in the core architecture, the architecture knowledge system and the demand analysis was evaluated with the test documents, and the ontology was updated locally or globally whenever the knowledge expression was not satisfied. Compared with common ontology construction methods, the proposed method realizes evaluation and iterative evolution during the construction process. Furthermore, the government ontology established by this method not only provides a knowledge representation framework for knowledge related to item transactions, but also provides a new idea for the computation of government knowledge. A government affair process optimization program developed on the basis of the proposed method has been successfully applied in a provincial government big data analysis project, which confirms the rationality and effectiveness of the method to a certain extent.
Mass and calcification classification method in mammogram based on multi-view transfer learning
XIAO He, LIU Zhiqin, WANG Qingfeng, HUANG Jun, ZHOU Ying, LIU Qiyu, XU Weiyun
Journal of Computer Applications    2020, 40 (5): 1460-1464.   DOI: 10.11772/j.issn.1001-9081.2019101744

In order to solve the problem of insufficient training data for the classification of breast masses and calcifications, a multi-view model based on secondary transfer learning was proposed in combination with the imaging characteristics of mammograms. Firstly, CBIS-DDSM (Curated Breast Imaging Subset of Digital Database for Screening Mammography) was used to construct a dataset of local breast tissue sections for pre-training the backbone network and completing its domain adaptation, so that the backbone network acquired the essential ability to capture pathological features. Then, the backbone network was transferred a second time into the multi-view model and fine-tuned on the dataset of Mianyang Central Hospital. At the same time, the number of positive samples in training was increased with CBIS-DDSM to improve the generalization ability of the network. The experimental results show that the domain adaptation learning and data augmentation strategy improves the performance criteria by 17% on average, and achieves AUC (Area Under Curve) values of 94% and 90% for masses and calcifications respectively.
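A minimal sketch of the two-stage transfer described above, assuming a PyTorch/torchvision environment (torchvision >= 0.13 for the weights API); the ResNet-18 backbone, the weight-sharing two-view design and the view names are assumptions for illustration, since the abstract does not specify them:

```python
import torch
import torch.nn as nn
from torchvision import models

# Stage 1: domain adaptation. A ResNet-18 backbone (an assumed choice) is
# pre-trained on CBIS-DDSM tissue-section patches.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
# ... train `backbone` on the CBIS-DDSM patch dataset here ...

# Stage 2: secondary transfer. The adapted backbone is reused, with shared
# weights, inside a two-view classifier fine-tuned on the hospital dataset.
class MultiViewNet(nn.Module):
    def __init__(self, backbone):
        super().__init__()
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop fc
        self.classifier = nn.Linear(512 * 2, 2)     # concatenated view features

    def forward(self, view_a, view_b):              # e.g. the two mammogram views
        f1 = self.features(view_a).flatten(1)
        f2 = self.features(view_b).flatten(1)
        return self.classifier(torch.cat([f1, f2], dim=1))

model = MultiViewNet(backbone)   # fine-tune on the Mianyang Central Hospital data
```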

Microservice identification method based on class dependencies under resource constraints
SHAO Jianwei, LIU Qiqun, WANG Huanqiang, CHEN Yaowang, YU Dongjin, SALAMAT Boranbaev
Journal of Computer Applications    2020, 40 (12): 3604-3611.   DOI: 10.11772/j.issn.1001-9081.2020040495
To effectively improve the automation level of reconstructing legacy software systems into the microservice architecture, and following the principle that the resource data operated on by two classes with a dependency are correlated to a certain degree, a microservice identification method based on class dependencies under resource constraints was proposed. Firstly, a class dependency graph was built from the class dependencies in the legacy software, and a resource entity label was set for each class. Then, a partitioning algorithm based on the resource entity labels was designed for the class dependency graph, and was used to divide the original software system into candidate microservices. Finally, candidate microservices with high mutual dependency degrees were merged to obtain the final microservice set. Experimental results on four open-source projects from GitHub demonstrate that the proposed method achieves a microservice partitioning accuracy higher than 90%, which shows that identifying microservices by jointly considering class dependencies and resource constraints is reasonable and effective.
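The following toy sketch illustrates partitioning a class dependency graph under resource-entity labels (class names, labels and the union-find grouping rule are hypothetical simplifications; the paper's final merging step by dependency degree is omitted):

```python
from collections import defaultdict

# Classes as nodes, dependencies as edges, one resource label per class.
dependencies = [("OrderCtl", "OrderSvc"), ("OrderSvc", "OrderDao"),
                ("UserCtl", "UserSvc"), ("UserSvc", "UserDao"),
                ("OrderSvc", "UserSvc")]           # cross-resource dependency
resource_of = {"OrderCtl": "order", "OrderSvc": "order", "OrderDao": "order",
               "UserCtl": "user", "UserSvc": "user", "UserDao": "user"}

parent = {c: c for c in resource_of}               # union-find forest
def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]
        x = parent[x]
    return x

for a, b in dependencies:
    if resource_of[a] == resource_of[b]:           # resource constraint
        parent[find(a)] = find(b)                  # group same-resource classes

services = defaultdict(list)
for cls in resource_of:
    services[find(cls)].append(cls)
print(list(services.values()))
# -> [['OrderCtl', 'OrderSvc', 'OrderDao'], ['UserCtl', 'UserSvc', 'UserDao']]
```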
Fast convergence average TimeSynch algorithm for apron sensor network
CHEN Weixing, LIU Qingtao, SUN Xixi, CHEN Bin
Journal of Computer Applications    2020, 40 (11): 3407-3412.   DOI: 10.11772/j.issn.1001-9081.2020030290
The traditional Average TimeSynch (ATS) algorithm for the APron Sensor Network (APSN) converges slowly and is inefficient because of its distributed iteration. Based on the principle that algebraic connectivity affects the convergence speed of consensus algorithms, a Fast Convergence Average TimeSynch (FCATS) was proposed. Firstly, virtual links were added between two-hop neighbor nodes in the APSN to increase the network connectivity. Then, the relative clock skew, logical clock skew and offset of each node were updated based on the information of its single-hop and two-hop neighbors. Finally, consensus iteration was performed according to the clock parameter update process. The simulation results show that FCATS converges after the consensus iterations, with a convergence speed about 50% higher than that of ATS, and that under different topologies the convergence speed is improved by more than 20%.
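The effect of the virtual two-hop links can be illustrated with a plain consensus-averaging simulation (a simplified stand-in for the full ATS clock-parameter update; the ring topology and step size are arbitrary choices):

```python
import numpy as np

def consensus_steps(adj, tol=1e-6, max_steps=10000):
    """Count averaging iterations until all node values agree within tol."""
    deg = adj.sum(axis=1)
    eps = 0.9 / deg.max()                      # step size keeping iteration stable
    x = np.arange(len(adj), dtype=float)       # initial clock offsets
    for k in range(max_steps):
        x = x + eps * (adj @ x - deg * x)      # average with (virtual) neighbors
        if x.max() - x.min() < tol:
            return k
    return max_steps

n = 8
one_hop = np.zeros((n, n))
for i in range(n):
    one_hop[i, (i + 1) % n] = one_hop[i, (i - 1) % n] = 1
two_hop = one_hop.copy()
for i in range(n):
    two_hop[i, (i + 2) % n] = two_hop[i, (i - 2) % n] = 1  # virtual two-hop links

# Adding two-hop links raises the algebraic connectivity, so consensus is faster.
print(consensus_steps(one_hop), consensus_steps(two_hop))
```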
Computation offloading method for workflow management in mobile edge computing
FU Shucun, FU Zhangjie, XING Guowen, LIU Qingxiang, XU Xiaolong
Journal of Computer Applications    2019, 39 (5): 1523-1527.   DOI: 10.11772/j.issn.1001-9081.2018081753
The energy consumption of mobile devices in mobile edge computing is becoming an increasingly prominent problem. In order to reduce it, an Energy-aware computation Offloading method for Workflows (EOW) was proposed. Technically, the average waiting time of computing tasks on edge devices was analyzed based on queuing theory, and time consumption and energy consumption models for mobile devices were established. Then a corresponding computation offloading method leveraging NSGA-III (Non-dominated Sorting Genetic Algorithm III) was designed to offload computing tasks reasonably: some tasks were processed by the mobile devices themselves, while the others were offloaded to the edge computing platform or the remote cloud, achieving the goal of saving energy for all the mobile devices. Finally, comparison experiments were conducted on the CloudSim platform. The experimental results show that EOW can effectively reduce the energy consumption of the mobile devices while satisfying the deadline of every workflow.
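As an illustrative sketch of the time and energy models (an M/M/1 queue stands in for the queuing-theoretic waiting-time analysis, and all parameter values are hypothetical), one candidate offloading decision could be evaluated as follows:

```python
def mm1_wait(arrival_rate, service_rate):
    """Average sojourn time of an M/M/1 queue, a simple stand-in for the
    edge-side waiting-time analysis (requires arrival_rate < service_rate)."""
    return 1.0 / (service_rate - arrival_rate)

def offload_cost(task_cycles, data_bits, f_local, p_local, rate, p_tx,
                 edge_arrival, edge_service):
    """Return (time, energy) for local execution vs. edge offloading."""
    t_local = task_cycles / f_local
    e_local = p_local * t_local
    t_edge = data_bits / rate + mm1_wait(edge_arrival, edge_service)
    e_edge = p_tx * (data_bits / rate)   # the device only pays for transmission
    return (t_local, e_local), (t_edge, e_edge)

local, edge = offload_cost(task_cycles=2e9, data_bits=8e6, f_local=1e9,
                           p_local=0.9, rate=2e7, p_tx=0.3,
                           edge_arrival=8.0, edge_service=10.0)
print("local (s, J):", local, " edge (s, J):", edge)
```

A multi-objective optimizer such as NSGA-III would then search over such offloading decisions for all tasks, trading time against energy under the workflow deadlines.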
Pneumothorax detection and localization in X-ray images based on dense convolutional network
LUO Guoting, LIU Zhiqin, ZHOU Ying, WANG Qingfeng, CHENG Jiezhi, LIU Qiyu
Journal of Computer Applications    2019, 39 (12): 3541-3547.   DOI: 10.11772/j.issn.1001-9081.2019050884
Pneumothorax detection in X-ray images faces two main problems: the pneumothorax usually overlaps with tissues such as ribs and clavicles, easily causing missed diagnosis, and the performance of existing detection methods remains to be improved; meanwhile, algorithms based on convolutional neural networks cannot localize suspicious pneumothorax areas and therefore lack interpretability. To address these problems, a method combining a Dense convolutional Network (DenseNet) with gradient-weighted class activation mapping was proposed. Firstly, a large-scale chest X-ray dataset named PX-ray was constructed for model training and testing. Secondly, the output node of the DenseNet was modified, and a sigmoid function was added after the fully connected layer to classify the chest X-ray images. During training, the weight of the cross-entropy loss function was set to alleviate the data imbalance problem and improve the accuracy of the model. Finally, the parameters and gradients of the last convolutional layer of the network were extracted, and the pneumothorax areas were roughly located by gradient-weighted class activation mapping. The experimental results show that the proposed method achieves a detection accuracy of 95.45% with the Area Under Curve (AUC), sensitivity and specificity all higher than 0.9, outperforms the classic VGG19, GoogLeNet and ResNet algorithms, and realizes the visualization of pneumothorax areas.
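A minimal Grad-CAM sketch in PyTorch along the lines described above (densenet121 with untrained weights stands in for the paper's trained model; the hook placement and the single-output sigmoid head are assumptions):

```python
import torch
from torchvision import models

model = models.densenet121(weights=None)
model.classifier = torch.nn.Linear(model.classifier.in_features, 1)  # sigmoid head
model.eval()

acts, grads = {}, {}
layer = model.features[-1]                       # final feature layer
layer.register_forward_hook(lambda m, i, o: acts.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(g=go[0]))

x = torch.randn(1, 3, 224, 224)
score = torch.sigmoid(model(x))[0, 0]            # pneumothorax probability
score.backward()

w = grads["g"].mean(dim=(2, 3), keepdim=True)    # channel weights from gradients
cam = torch.relu((w * acts["a"]).sum(dim=1))     # weighted sum of activation maps
cam = cam / (cam.max() + 1e-8)                   # normalized coarse heat map
print(cam.shape)                                 # upsample to image size to localize
```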
Automatic method for left atrial appendage segmentation from ultrasound images based on deep learning
HAN Luyi, HUANG Yunzhi, DOU Haoran, BAI Wenjuan, LIU Qi
Journal of Computer Applications    2019, 39 (11): 3361-3365.   DOI: 10.11772/j.issn.1001-9081.2019040771
Segmenting the Left Atrial Appendage (LAA) from ultrasound images is an essential step in obtaining clinical indicators, and accurately locating the target is the prerequisite and the difficulty of automatic and accurate segmentation. Therefore, a method combining deep-learning-based automatic localization with a model-based segmentation algorithm was proposed to accomplish automatic LAA segmentation from ultrasound images. Firstly, a You Only Look Once (YOLO) model was trained as the network structure for the automatic localization of the LAA. Secondly, the optimal weight files were determined on the validation set and the bounding box of the LAA was predicted. Finally, based on the correct localization, the bounding box was magnified 1.5 times and used as the initial contour, and the C-V (Chan-Vese) model was utilized to realize the automatic segmentation of the LAA. The segmentation performance was evaluated by five metrics: accuracy, sensitivity, specificity, and positive and negative predictive values. The experimental results show that the proposed method achieves good automatic segmentation under different resolutions and visual modes; on small-sample data the localization performance is optimal at 1000 iterations, with a correct localization rate of 72.25%, and the C-V model reaches an accuracy of 98.09% given correct localization. Therefore, deep learning is a rather promising technique for the automatic segmentation of the LAA from ultrasound images, as it can provide a good initial contour for contour-based segmentation algorithms.
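A rough sketch of the final step, assuming scikit-image is available (its morphological Chan-Vese variant stands in for the classical C-V model, and the box coordinates are made up): the predicted bounding box is magnified 1.5 times and used as the initial level set:

```python
import numpy as np
from skimage.segmentation import morphological_chan_vese

def magnify_box(x, y, w, h, scale, img_w, img_h):
    """Enlarge a detection box around its center and clip it to the image."""
    cx, cy = x + w / 2.0, y + h / 2.0
    w, h = w * scale, h * scale
    x0, y0 = max(0, int(cx - w / 2)), max(0, int(cy - h / 2))
    x1, y1 = min(img_w, int(cx + w / 2)), min(img_h, int(cy + h / 2))
    return x0, y0, x1, y1

img = np.random.rand(480, 640)                   # stand-in ultrasound frame
x0, y0, x1, y1 = magnify_box(250, 180, 120, 90, scale=1.5, img_w=640, img_h=480)

init = np.zeros_like(img, dtype=np.int8)         # magnified box as initial contour
init[y0:y1, x0:x1] = 1
# (older scikit-image versions name the first keyword `iterations`, not `num_iter`)
seg = morphological_chan_vese(img, num_iter=100, init_level_set=init)
print(seg.shape)
```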
Compression method based on bit extraction of independent rule sets for packet classification
WANG Xiaolong, LIU Qinrang, LIN Senjie, HUANG Yajing
Journal of Computer Applications    2018, 38 (8): 2375-2380.   DOI: 10.11772/j.issn.1001-9081.2018010069
The continuous expansion of multi-field entries in scale and the growing increase in their bit-width bring heavy storage pressure to hardware on the Internet. In order to solve this problem, a compression method based on Bit Extraction of Independent rule Subsets (BEIS) was proposed. Firstly, some fields were merged based on the logical relationships among the multiple match fields, reducing the number of match fields and the width of flow tables. Secondly, after dividing the merged rule set into independent rule subsets, discriminating bits were extracted from the divided subsets to implement the matching and searching function, further reducing the Ternary Content Addressable Memory (TCAM) space used. Finally, a lookup hardware architecture for this method was put forward. Simulation results show that, with comparable time complexity, the proposed method can reduce the storage space by 20% compared with Field Trimmer (FT) on OpenFlow flow tables; in addition, for packet classification rule sets common in practice, such as access control lists and firewalls, a compression ratio of 20%-40% can be achieved.
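The bit-extraction idea can be illustrated on exact-match bit strings (a simplification of ternary rules, with an independent rule subset assumed): find a smallest set of bit positions whose values already distinguish every rule, so only those bits need to be stored and matched:

```python
from itertools import combinations

def discriminating_bits(rules, max_bits=4):
    """Smallest set of bit positions whose values distinguish all rules."""
    width = len(rules[0])
    for k in range(1, max_bits + 1):
        for pos in combinations(range(width), k):
            keys = {tuple(r[p] for p in pos) for r in rules}
            if len(keys) == len(rules):      # every rule gets a unique key
                return pos
    return None

rules = ["0110", "0011", "1010", "1111"]
print(discriminating_bits(rules))   # (0, 1): storing two bits per rule suffices
```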
Adaptive unicast routing algorithm for vertically partially connected 3D NoC
SUN Meidong, LIU Qinrang, LIU Dongpei, YAN Binghao
Journal of Computer Applications    2018, 38 (5): 1470-1475.   DOI: 10.11772/j.issn.1001-9081.2017102411
In a vertically partially connected three-Dimensional Network-on-Chip (3D NoC), the traditional TSV (Through Silicon Via) table stores only TSV address information, which easily causes network congestion. In order to solve this problem, a record table architecture was proposed, which stores not only the addresses of the four TSVs nearest to the router but also the input-buffer occupancy and fault information of the corresponding routers. Based on the record table, a novel adaptive unicast routing algorithm for the shortest transmission path was proposed. Firstly, the coordinates of the current node and the destination node were compared to determine the transmission mode of packets. Secondly, the proposed algorithm simultaneously determined whether the transmission path was faulty and obtained the buffer occupancy information. Finally, the optimal output port was determined and the packets were transmitted to the neighboring router. Experimental results under two network sizes show that the proposed algorithm has obvious advantages in average delay and throughput over the Elevator-First algorithm; in addition, when the network fault rate is 50%, the packet loss rates under the Random and Shuffle traffic models are 25.5% and 29.5% respectively.
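A toy sketch of how a record table with fault and buffer-occupancy fields might drive the TSV choice (the table contents, Manhattan distance metric and tie-breaking rule are illustrative assumptions):

```python
# Toy record table: the four nearest TSVs with buffer occupancy and fault flags.
record_table = [
    {"tsv": (2, 3), "buffer_occupancy": 0.8, "faulty": False},
    {"tsv": (4, 1), "buffer_occupancy": 0.3, "faulty": False},
    {"tsv": (1, 1), "buffer_occupancy": 0.1, "faulty": True},
    {"tsv": (5, 5), "buffer_occupancy": 0.2, "faulty": False},
]

def pick_tsv(cur, table):
    """Choose the closest fault-free TSV, breaking ties by lighter buffer load."""
    ok = [e for e in table if not e["faulty"]]
    dist = lambda e: abs(e["tsv"][0] - cur[0]) + abs(e["tsv"][1] - cur[1])
    return min(ok, key=lambda e: (dist(e), e["buffer_occupancy"]))

print(pick_tsv((3, 2), record_table)["tsv"])  # (4, 1): same distance as (2, 3), lighter buffer
```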
Trajectory privacy-preserving method based on information entropy suppression
WANG Yifei, LUO Yonglong, YU Qingying, LIU Qingqing, CHEN Wen
Journal of Computer Applications    2018, 38 (11): 3252-3257.   DOI: 10.11772/j.issn.1001-9081.2018040861
Aiming at the poor data anonymity and large data loss caused by excessive suppression in traditional high-dimensional trajectory privacy protection models, a new trajectory privacy-preserving method based on information entropy suppression was proposed. An entropy-based flowgraph was generated for the trajectory dataset, a reasonable cost function was designed according to the information entropy of spatio-temporal points, and privacy was preserved by locally suppressing spatio-temporal points. Meanwhile, an improved algorithm for comparing the similarity of flowgraphs before and after suppression was proposed, and a function for evaluating the privacy gain was introduced. Finally, the proposed method was compared with the LK-Local (Length K-anonymity based on Local suppression) approach in terms of trajectory privacy and data practicability. Experimental results on a synthetic subway transportation system dataset show that, with the same anonymity parameter value, the proposed method increases the similarity measure by about 27%, reduces the data loss by about 25%, and increases the privacy gain by about 21%.
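A simplified sketch of entropy-driven local suppression (the cost below is a bare Shannon-entropy contribution per point, a stand-in for the paper's cost design; the point names are arbitrary):

```python
import math
from collections import Counter

def point_entropy(trajectories):
    """Entropy contribution of each spatio-temporal point from its visit
    frequency over the whole trajectory set."""
    counts = Counter(p for tr in trajectories for p in tr)
    total = sum(counts.values())
    return {p: -(c / total) * math.log2(c / total) for p, c in counts.items()}

def suppress(trajectories, k):
    """Locally suppress the k points with the highest entropy cost."""
    cost = point_entropy(trajectories)
    victims = set(sorted(cost, key=cost.get, reverse=True)[:k])
    return [[p for p in tr if p not in victims] for tr in trajectories]

trajs = [["A", "B", "C"], ["A", "C"], ["B", "C"], ["A", "B", "D"]]
print(suppress(trajs, k=1))   # the highest-cost point is removed everywhere
```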
Ontology model for detecting Android implicit information flow
LIU Qiyuan, JIAO Jian, CAO Hongsheng
Journal of Computer Applications    2018, 38 (1): 61-66.   DOI: 10.11772/j.issn.1001-9081.2017071970
Concerning the problem that traditional information leakage detection technologies cannot effectively detect implicit information leakage in Android applications, a reasoning method for Android Implicit Information Flow (IIF) combining a control structure ontology model with Semantic Web Rule Language (SWRL) inference rules was proposed. Firstly, the key elements that generate implicit information flow in control structures were analyzed and modeled to establish the control structure ontology model. Secondly, based on an analysis of the main causes of implicit information leakage, criterion rules for implicit information flow based on Strict Control Dependence (SCD) were given and converted into SWRL inference rules. Finally, the control structure ontology instances and the SWRL inference rules were imported into the inference engine Jess for reasoning. The experimental results show that the proposed method can deduce a variety of SCD-based implicit information flows of different natures, with a testing accuracy of 83.3% on the sample set, and the reasoning time stays within a reasonable interval when the number of branches is limited. The proposed model can effectively assist traditional information leakage detection and improve its accuracy.
Trend prediction of public opinion propagation based on parameter inversion — an empirical study on Sina micro-blog
LIU Qiaoling, LI Jin, XIAO Renbin
Journal of Computer Applications    2017, 37 (5): 1419-1423.   DOI: 10.11772/j.issn.1001-9081.2017.05.1419
Since existing research on public opinion propagation models is seldom combined with practical opinion data, and mining the inherent law of public opinion propagation from opinion big data is becoming an urgent problem, a parameter inversion algorithm for a public opinion propagation model was proposed, which uses a neural network and is based on practical opinion big data. A network opinion propagation model was constructed by improving the classical Susceptible-Infective-Recovered (SIR) epidemic model. Based on this model, the parameter inversion algorithm was used to predict the trend of network public opinion in actual cases. Compared with the Markov prediction model, the proposed algorithm accurately predicts the specific heat value of public opinion. The experimental results show that the proposed algorithm has certain superiority in prediction and can be used for data fitting, process simulation and trend prediction of network emergencies.
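A sketch of SIR-based parameter inversion on synthetic data; note that SciPy's least-squares curve_fit is used here in place of the paper's neural-network inversion, and the initial conditions and true parameters are made up:

```python
import numpy as np
from scipy.integrate import odeint
from scipy.optimize import curve_fit

def sir(y, t, beta, gamma):
    s, i, r = y
    return [-beta * s * i, beta * s * i - gamma * i, gamma * i]

def infected_curve(t, beta, gamma):
    sol = odeint(sir, [0.99, 0.01, 0.0], t, args=(beta, gamma))
    return sol[:, 1]                      # fraction of actively spreading users

t_obs = np.linspace(0, 30, 31)            # synthetic "observed heat" series
observed = infected_curve(t_obs, 0.6, 0.2) + np.random.normal(0, 0.005, t_obs.size)
(beta, gamma), _ = curve_fit(infected_curve, t_obs, observed, p0=(0.5, 0.1))
print(beta, gamma)                        # inverted propagation parameters
```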
Hybrid imperialist competitive algorithm for solving job-shop scheduling problem
YANG Xiaodong, KANG Yan, LIU Qing, SUN Jinwen
Journal of Computer Applications    2017, 37 (2): 517-522.   DOI: 10.11772/j.issn.1001-9081.2017.02.0517
For the Job-shop Scheduling Problem (JSP) with the objective of minimizing the makespan, a hybrid algorithm combining the Imperialist Competitive Algorithm (ICA) with Tabu Search (TS) was proposed. On the basis of the imperialist competitive algorithm, the crossover and mutation operators of the Genetic Algorithm (GA) were applied as the assimilation step to strengthen the global search ability. To overcome the weakness of the imperialist competitive algorithm in local search, the TS algorithm was used to improve the offspring produced by assimilation, with a hybrid neighborhood structure and a novel selection strategy making the search more efficient. Combining these global and local search abilities, the hybrid algorithm was tested on 13 classic benchmark scheduling problems and compared with four other recent hybrid algorithms; the experimental results show that the proposed algorithm is effective and stable.
Clothing retrieval based on landmarks
CHEN Aiai, LI Lai, LIU Guangcan, LIU Qingshan
Journal of Computer Applications    2017, 37 (11): 3249-3255.   DOI: 10.11772/j.issn.1001-9081.2017.11.3249
At present, retrieval of the same or similar clothing styles is mainly text-based or content-based. Text-based algorithms often require massive labeled samples and suffer from missing labels and annotation differences caused by subjective judgment. Content-based algorithms usually extract image features such as color, shape and texture and then measure similarity, but they have difficulty dealing with background color interference and with clothing deformation caused by different viewing angles and postures. Aiming at these problems, clothing retrieval based on landmarks was proposed. The proposed method used a cascaded deep convolutional neural network to locate key points, and combined the low-level visual information of the key-point regions with the high-level semantic information of the whole image. Compared with traditional methods, the proposed method can effectively handle clothing deformation and complex background interference caused by viewing angle and posture; meanwhile, it does not need huge numbers of labeled samples and is robust to background and deformation. Experiments on two large-scale datasets, Fashion Landmark and BDAT-Clothes, show that the proposed algorithm can effectively improve the precision and recall.
High-performance regular expressions matching algorithm based on improved FPGA circuit
ZHUO Yannan, LIU Qiang, JIANG Lei, DAI Qiong
Journal of Computer Applications    2016, 36 (4): 927-930.   DOI: 10.11772/j.issn.1001-9081.2016.04.0927
Concerning the low throughput and excessive logic resource usage in regular expression matching, an improved Deterministic Finite Automaton (DFA) regular expression matching algorithm implemented entirely with Field-Programmable Gate Array (FPGA) logic circuits was designed. Firstly, statistics showed that most transfer edges of each DFA state point intensively to the same state. Then, according to the transfer matrix of the regular expressions, an acquiescent (default) transfer edge was provided for each state in the DFA. Finally, a simplified logic circuit was given, and measurements were conducted on the L7-filter rule set. The experimental results show that, compared with the previous Nondeterministic Finite Automaton (NFA) algorithm, 10%-60% of the rules achieve a higher throughput, and 62%-87% of the rules cost fewer logic resources.
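The acquiescent-edge idea maps naturally to a compressed transition table: each state keeps only the transitions that differ from its default edge. A small software sketch of the same principle (the example automaton is hypothetical; the paper realizes this in FPGA logic, not software):

```python
class CompressedDFA:
    def __init__(self, default, exceptions, accepting):
        self.default = default          # state -> default (acquiescent) next state
        self.exceptions = exceptions    # state -> {symbol: next state}
        self.accepting = accepting

    def match(self, text):
        s = 0
        for ch in text:
            s = self.exceptions.get(s, {}).get(ch, self.default[s])
        return s in self.accepting

# DFA for strings containing "ab": most edges fall back to the default.
dfa = CompressedDFA(
    default={0: 0, 1: 0, 2: 2},
    exceptions={0: {"a": 1}, 1: {"a": 1, "b": 2}},
    accepting={2},
)
print(dfa.match("xxaby"), dfa.match("ba"))   # True False
```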
Firm real-time data-transmitting system based on data stream-transmitting mechanism
CAO Jian, LIU Qiong, WANG Yuan
Journal of Computer Applications    2016, 36 (3): 596-600.   DOI: 10.11772/j.issn.1001-9081.2016.03.596
Aiming at the low data-transmitting efficiency of traditional message-oriented middleware in power information systems, a firm real-time data-transmitting system based on a data stream-transmitting mechanism was proposed. A queue caching mechanism was adopted to realize asynchronous sending and batch confirmation of messages. The data stream-transmitting mechanism was designed to eliminate the cache latency and the cache resource cost of data on transit nodes, improving the timeliness and concurrency of data transmission. A distributed data-routing design was adopted to make the node network transparent to third-party systems and to provide data routing and distribution. Simulation results on a data exchange scenario of a provincial electric power information system verify the system performance: the system supports 3000 concurrent data exchanges, reaches a transmission speed of 980 MB/s in a gigabit-bandwidth environment, and keeps the switching delay at the millisecond level.
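A minimal sketch of the queue-caching idea, asynchronous enqueue with batched sending and confirmation (the batch size, sentinel shutdown and `transmit` placeholder are illustrative assumptions):

```python
import queue
import threading

BATCH = 64
out_q = queue.Queue(maxsize=10000)     # queue cache: producers never block on I/O

def transmit(batch):                   # placeholder for the real stream channel
    print(f"sent+acked {len(batch)} messages")

def sender():
    """Drain the queue, send in batches, and confirm each batch at once."""
    batch = []
    while True:
        msg = out_q.get()
        if msg is None:                # shutdown sentinel
            break
        batch.append(msg)
        if len(batch) >= BATCH:
            transmit(batch)            # one network round-trip for 64 messages
            batch.clear()
    if batch:
        transmit(batch)                # flush the remainder

t = threading.Thread(target=sender)
t.start()
for i in range(200):
    out_q.put(f"row-{i}")              # asynchronous: the caller returns at once
out_q.put(None)
t.join()
```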
Parallel sparse subspace clustering via coordinate descent minimization
WU Jieqi, LI Xiaoyu, YUAN Xiaotong, LIU Qingshan
Journal of Computer Applications    2016, 36 (2): 372-376.   DOI: 10.11772/j.issn.1001-9081.2016.02.0372
Since the rapidly increasing data scale imposes a great computational challenge on Sparse Subspace Clustering (SSC), and the existing optimization algorithms for SSC, e.g. ADMM (Alternating Direction Method of Multipliers), are implemented sequentially and cannot exploit multi-core processors to improve computational efficiency, a parallel SSC based on coordinate descent was proposed, inspired by the observation that SSC can be formulated as a sequence of sample-based sparse self-expression sub-problems. The proposed algorithm solves each sub-problem by a coordinate descent algorithm with few parameters and fast convergence. Since the self-expression sub-problems are independent, they are solved simultaneously on different processor cores, which brings low computing resource consumption and fast running speed and makes the proposed algorithm suitable for large-scale clustering. Experiments on simulated data and the Hopkins-155 motion segmentation dataset demonstrate that the proposed parallel SSC on multi-core processors significantly improves computational efficiency while preserving accuracy, compared with ADMM.
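A sketch of the per-sample self-expression sub-problems solved in parallel; scikit-learn's Lasso (itself a coordinate descent solver) stands in for the paper's solver, and the data size, regularization weight and core count are arbitrary:

```python
import numpy as np
from multiprocessing import Pool
from sklearn.linear_model import Lasso

np.random.seed(0)                 # children rebuild the same X under spawn/fork
X = np.random.randn(50, 200)      # 50-dim features, 200 samples (columns)

def self_express(i):
    """Sub-problem i: min_c ||x_i - X_{-i} c||^2 + lam ||c||_1, solved by the
    coordinate descent inside scikit-learn's Lasso."""
    y = X[:, i]
    A = np.delete(X, i, axis=1)
    c = Lasso(alpha=0.05, max_iter=2000).fit(A, y).coef_
    return np.insert(c, i, 0.0)   # keep a zero diagonal (no self-representation)

if __name__ == "__main__":
    with Pool(4) as pool:                  # independent sub-problems on 4 cores
        C = np.array(pool.map(self_express, range(X.shape[1])))
    W = np.abs(C) + np.abs(C.T)            # affinity matrix for spectral clustering
    print(W.shape)
```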
Distributed deduplication storage system based on Hadoop platform
LIU Qing, FU Yinjin, NI Guiqiang, MEI Jianmin
Journal of Computer Applications    2016, 36 (2): 330-335.   DOI: 10.11772/j.issn.1001-9081.2016.02.0330
Focusing on the heavy data redundancy in data centers, where backup data in particular causes a tremendous waste of storage space, a deduplication prototype based on the Hadoop platform was proposed. Deduplication technology, which detects and eliminates redundant data in a particular data set, can greatly reduce the data storage volume and optimize the utilization of storage space. Using two big data management tools, the Hadoop Distributed File System (HDFS) and the non-relational database HBase, a scalable and distributed deduplication storage system was designed and implemented. In this system, the MapReduce parallel programming framework was responsible for parallel deduplication, HDFS was responsible for storing the data left after deduplication, and the index table was stored in HBase for efficient chunk fingerprint indexing. The system was tested with virtual machine image file sets. The results demonstrate that the Hadoop-based distributed deduplication system ensures high throughput and excellent scalability while guaranteeing a high deduplication rate.
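A toy sketch of the chunk-fingerprinting logic (Python dictionaries stand in for the HBase fingerprint table and HDFS block storage; fixed-size chunking and SHA-1 are assumptions consistent with common deduplication designs):

```python
import hashlib

def dedup_store(data, chunk_size=4096, index=None, store=None):
    """Fixed-size chunking with SHA-1 fingerprints; `index` plays the role of
    the HBase fingerprint table and `store` the role of HDFS block storage."""
    index = {} if index is None else index
    store = {} if store is None else store
    recipe = []                                  # fingerprints to rebuild the file
    for off in range(0, len(data), chunk_size):
        chunk = data[off:off + chunk_size]
        fp = hashlib.sha1(chunk).hexdigest()
        if fp not in index:                      # new chunk: store it only once
            index[fp] = True
            store[fp] = chunk
        recipe.append(fp)
    return recipe, index, store

data = b"A" * 8192 + b"B" * 4096
recipe, index, store = dedup_store(data)
print(len(recipe), "chunks referenced,", len(store), "chunks stored")  # 3 vs. 2
```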
Application of symbiotic system-based artificial fish school algorithm in feed formulation optimization
LIU Qing, LI Ying, QING Maiyu, ODAKA Tomohiro
Journal of Computer Applications    2016, 36 (12): 3303-3310.   DOI: 10.11772/j.issn.1001-9081.2016.12.3303
In consideration of the extensive applicability of intelligent algorithms to various types of feed formulation optimization models, the Artificial Fish Swarm Algorithm (AFSA) was applied to feed formulation optimization for the first time. To meet the precision required by feed formulation optimization, a symbiotic system-based AFSA was employed, which significantly improves the convergence accuracy and speed of the original AFSA. During optimization, the positions of Artificial Fish (AF) individuals in the solution space were directly coded as solution vectors via the feed ratio, and a penalty-based objective function was employed to evaluate the fitness of AF individuals. The AF individuals performed several behavior operators to explore the solution space according to a predefined behavioral strategy. The validity of the proposed algorithm was verified on three practical instances. The results show that the proposed algorithm works out optimal feed formulations that not only remarkably reduce the fodder cost but also satisfy the various nutrition constraints, and that its optimization performance is superior to the other existing algorithms.
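A sketch of a penalty-based objective over a feed-ratio vector (the ingredient data, nutrient bounds and penalty weight are invented for illustration; an AF individual's position would be the `ratio` argument):

```python
import numpy as np

# Ingredients: cost per kg, protein fraction, energy in MJ/kg (hypothetical values).
cost    = np.array([0.30, 0.45, 0.25])
protein = np.array([0.09, 0.44, 0.12])
energy  = np.array([13.5, 11.0, 12.0])

def fitness(ratio, w=50.0):
    """Penalty-based objective: fodder cost plus penalties for violating the
    ratio-sum and minimum-nutrition constraints. Lower is fitter."""
    penalty  = abs(ratio.sum() - 1.0)              # ratios must sum to 1
    penalty += max(0.0, 0.18 - protein @ ratio)    # at least 18% protein
    penalty += max(0.0, 12.0 - energy @ ratio)     # at least 12 MJ/kg energy
    return cost @ ratio + w * penalty

print(fitness(np.array([0.5, 0.3, 0.2])))          # feasible ratio: pure cost 0.335
```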
Distributed fault detection for wireless sensor network based on cumulative sum control chart
LIU Qiuyue, CHENG Yong, WANG Jun, ZHONG Shuiming, XU Liya
Journal of Computer Applications    2016, 36 (11): 3016-3020.   DOI: 10.11772/j.issn.1001-9081.2016.11.3016
With the stringent resources and distributed nature of wireless sensor networks, fault diagnosis of sensor nodes faces great challenges. To address the high false alarm ratio and the considerable computational redundancy on nodes of existing diagnosis approaches, a new fault detection mechanism based on the Cumulative Sum control chart (CUSUM) and neighbor coordination was proposed. Firstly, the historical data of a single node were analyzed by CUSUM to improve the sensitivity of fault diagnosis and locate the change point. Then, faulty nodes were detected by judging node status through data exchange between neighboring nodes. The experimental results show that the detection accuracy is over 97.7% and the false alarm ratio is below 2% even when the sensor fault probability in the network reaches 35%. Hence, the proposed algorithm keeps a high detection accuracy and a low false alarm ratio under high fault probabilities, and clearly reduces the influence of the sensor fault probability.
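A minimal two-sided CUSUM detector over a single node's readings (the allowance, decision threshold and simulated drift fault are illustrative choices):

```python
import numpy as np

def cusum(x, target, k=0.5, h=5.0):
    """Two-sided CUSUM: return the index of the first alarm (near the change
    point) or -1; k is the allowance, h the decision threshold."""
    s_hi = s_lo = 0.0
    for i, v in enumerate(x):
        s_hi = max(0.0, s_hi + (v - target - k))   # upward drift statistic
        s_lo = max(0.0, s_lo + (target - v - k))   # downward drift statistic
        if s_hi > h or s_lo > h:
            return i
    return -1

readings = np.concatenate([np.random.normal(20, 0.5, 60),    # healthy sensor
                           np.random.normal(22, 0.5, 40)])   # drift fault at t=60
print(cusum(readings, target=20.0))   # alarms shortly after the change point
```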
Matrix-structural fast learning of cascaded classifier for negative sample inheritance
LIU Yang, YAN Shengye, LIU Qingshan
Journal of Computer Applications    2015, 35 (9): 2596-2601.   DOI: 10.11772/j.issn.1001-9081.2015.09.2596
The negative-sample bootstrap process of the matrix-structural learning of cascaded classifier algorithm has several disadvantages, such as the inefficiency of getting high-quality samples and the bad impact of bootstrap on the overall learning efficiency and the final classifier performance. To address them, a fast learning algorithm, matrix-structural fast learning of cascaded classifiers with negative sample inheritance, was proposed. Its negative-sample bootstrap process combines sample inheritance with graded bootstrap: helpful samples are first inherited from the negative sample set used in the previous training stage, and the insufficient part of the sample set is then collected from the negative image set. Sample inheritance narrows the bootstrap range of useful samples, which accelerates the bootstrap, while sample pre-screening during the bootstrap increases sample complexity and promotes the final classifier performance. The experimental results show that, compared with the matrix-structural learning of cascaded classifier algorithm, the proposed algorithm saves 20 hours of training time and improves detection performance by 1 percentage point; it also performs well compared with 17 other human detection algorithms.
Fast super-resolution reconstruction for single image based on predictive sparse coding
SHEN Hui, YUAN Xiaotong, LIU Qingshan
Journal of Computer Applications    2015, 35 (6): 1749-1752.   DOI: 10.11772/j.issn.1001-9081.2015.06.1749

The classic super-resolution algorithm via sparse coding has a high computational cost in the reconstruction phase. In view of this disadvantage, a single-image super-resolution method based on predictive sparse coding was proposed. In the training phase, the proposed method added a code prediction error term to the traditional sparse coding error function and minimized the resulting objective function with an alternating minimization procedure. In the testing phase, the reconstruction coefficients could be estimated by simply multiplying the low-dimensional image patch by the low-dimensional dictionary, without solving any sparse regression problem. The experimental results demonstrate that, compared with the classic single-image super-resolution algorithm via sparse coding, the proposed method significantly reduces the reconstruction time while maintaining the super-resolution visual effect.
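A numerical sketch of the train/test asymmetry described above (random matrices stand in for real dictionaries, patches and codes; ridge regression is one simple way to fit a code predictor, not necessarily the paper's procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
D_hr = rng.standard_normal((81, 100))   # high-resolution dictionary (9x9 patches)
Y = rng.standard_normal((30, 500))      # low-resolution training patch features
# Sparse codes produced by the training-phase coder (random stand-ins here):
C = rng.standard_normal((100, 500)) * (rng.random((100, 500)) < 0.1)

# Training: fit a code predictor W by ridge regression, min ||C - W Y||_F^2.
lam = 1e-3
W = C @ Y.T @ np.linalg.inv(Y @ Y.T + lam * np.eye(30))

# Testing: no sparse regression at all, one matrix-vector product per patch.
y_new = rng.standard_normal(30)
c_fast = W @ y_new                      # predicted reconstruction coefficients
x_hr = D_hr @ c_fast                    # high-resolution patch estimate
print(x_hr.shape)
```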

Design of virtual surgery system in reduction of maxillary fracture
LI Danni, LIU Qi, TIAN Qi, ZHAO Leiyu, HE Ling, HUANG Yunzhi, ZHANG Jing
Journal of Computer Applications    2015, 35 (6): 1730-1733.   DOI: 10.11772/j.issn.1001-9081.2015.06.1730

Based on the open-source software CHAI 3D (Computer Haptics, Visualization and Interaction in 3D) and the Open Graphics Library (OpenGL), a virtual surgery system was designed for the reduction of maxillary fractures. The virtual simulation scenario was constructed with CT data of real patients, and a Geomagic force feedback device was used to manipulate the virtual 3D models and output haptic feedback. On the basis of the original single finger-proxy algorithm, a multi-proxy collision algorithm was proposed to prevent the tools from penetrating the virtual organs during the simulation. In the virtual surgery system, the operator can use the force feedback device to select, move and rotate the virtual skull model, simulating the movements and placement of a real operation. The proposed system can be used to train medical students and for the preoperative planning of complicated surgeries.

Object classification based on discriminable features and continuous tracking
LI Zhihua, LIU Qiuluan
Journal of Computer Applications    2014, 34 (5): 1275-1278.   DOI: 10.11772/j.issn.1001-9081.2014.05.1275

Aiming at the object classification problem in heavily crowded and complex visual surveillance scenes, a real-time object classification approach based on discriminable features and continuous tracking was proposed. Firstly, rapid feature matching over color, shape and position was used to build the initial target correspondence across the whole scene, in which the motion direction and velocity of a moving target were used to predict the preferred search area in the next frame and thus accelerate target matching. Then, an appearance model was used to re-match occluded objects for which no correspondence had been established. To enhance classification precision, the final object class was determined as the maximum-probability result of continuous feature extraction and classification along the tracking results. Experimental results show that the proposed method achieves better classification precision than a method without continuous tracking, with an average correct rate of 97%, and effectively improves object classification performance in complex scenes.

Research and implementation of WLAN centralized management system based on control and provisioning of wireless access points protocol
LIU Qian, HU Zhikun, LIAO Beiping, LIAO Yuanqin, GUO Hailiang
Journal of Computer Applications    2014, 34 (3): 635-639.   DOI: 10.11772/j.issn.1001-9081.2014.03.0635

In view of the maintenance difficulty and high cost in large-scale deployments of Wireless Local Area Networks (WLAN), the Control and Provisioning of Wireless Access Points (CAPWAP) protocol, which governs the communication between the Access Controller (AC) and the Wireless Termination Point (WTP), was researched and implemented. In a Linux environment, the main features such as state machine management and centralized WTP configuration were realized, and a WLAN centralized management platform based on the local Medium Access Control (MAC) framework was built. The Wireshark capture tool, Chariot and Iperf were used to test the platform. The capture test results verify the feasibility of the framework, and the throughput and User Datagram Protocol (UDP) test results show that the network performance is efficient and stable.

Retransmission mechanism based on network coding in wireless networks
LIU Qilie, WU Yangyang, CAO Bin
Journal of Computer Applications    2014, 34 (2): 309-312.  
Current applications of network coding to retransmission in single-hop wireless networks are based on Single-Sender Multiple-Receiver (SSMR) scenarios. This paper therefore proposed a retransmission mechanism named NCWRM (Network Coding Wireless Retransmission Mechanism) that can be used in multiple-sender multiple-receiver networks, where each node can be either a sender or a receiver. After a packet fails in both the initial transmission and the first retransmission, a node can broadcast a coded packet combining multiple lost packets in the second retransmission. Multiple receivers can then obtain their lost packets simultaneously by decoding the coded packet, which effectively improves retransmission efficiency. Theoretical analysis and simulation results show that NCWRM can significantly improve the saturation throughput of the system while reducing overhead and packet loss rate.
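The coding step can be illustrated with a two-packet XOR example, the classic network-coded retransmission trick (packet contents are arbitrary):

```python
def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# Packets P1 and P2 were broadcast; receiver R1 lost P1, receiver R2 lost P2.
p1, p2 = b"AAAA", b"BBBB"
coded = xor(p1, p2)                  # one retransmission instead of two

# Each receiver decodes its missing packet with the packet it already holds.
recovered_by_r1 = xor(coded, p2)     # R1 has P2 -> recovers P1
recovered_by_r2 = xor(coded, p1)     # R2 has P1 -> recovers P2
print(recovered_by_r1 == p1, recovered_by_r2 == p2)   # True True
```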
Personalization recommendation algorithm for Web resources based on ontology
LIANG Junjie, LIU Qiongni, YU Dunhui
Journal of Computer Applications    2014, 34 (11): 3135-3139.   DOI: 10.11772/j.issn.1001-9081.2014.11.3135

To improve the accuracy of recommending Web resources, a personalized recommendation algorithm based on ontology, named BO-RM, was proposed. Topic extraction and similarity measurement methods were designed, and ontology semantics were used to cluster Web resources. By capturing a user's browsing tracks, the preference tendency and the recommendations were adjusted dynamically. Comparison experiments with a situation-based collaborative filtering algorithm named CFR-RM and a model-based personalized prediction algorithm were conducted. The results show that BO-RM has relatively stable overhead and good performance in Mean Reciprocal Rank (MRR) and Mean Average Precision (MAP). They also show that BO-RM improves efficiency by analyzing large volumes of Web resources offline, which makes it practical; in addition, BO-RM captures users' interests in real time and updates the recommendation list dynamically, meeting the real needs of users.

Collision attack on Zodiac algorithm
LIU Qing, WEI Hongru, PAN Wei
Journal of Computer Applications    2014, 34 (1): 73-77.   DOI: 10.11772/j.issn.1001-9081.2014.01.0073
In order to study the ability of the Zodiac algorithm to resist collision attacks, two distinguishers of Zodiac, of 8 rounds and 9 rounds respectively, were proposed based on an equivalent structure of the algorithm. Firstly, collision attacks on 12 to 16 rounds of the algorithm were mounted by adding proper rounds before or after the 9-round distinguishers; the data complexities were 2^15, 2^31.2, 2^31.5, 2^31.7 and 2^63.9, and the time complexities were 2^33.8, 2^49.9, 2^75.1, 2^108 and 2^140.1, respectively. Then the 8-round distinguishers were applied to the full-round algorithm, with data complexity 2^60.6 and time complexity 2^173.9. These results show that neither full-round Zodiac-192 nor full-round Zodiac-256 is immune to collision attack.
Routing algorithm in opportunistic network based on historical utility
LIU Qilie, XU Meng, LI Yun, YANG Jun
Journal of Computer Applications    2013, 33 (02): 361-364.   DOI: 10.3724/SP.J.1087.2013.00361
In view of the low delivery ratio of conventional probabilistic routing in opportunistic networks, an improved routing algorithm based on History Meeting Predictability Routing (HMPR) was put forward. The algorithm uses the contact duration and meeting frequency from the historical information of nodes to predict the utility of packets being successfully delivered to their destination. By comparing utility values, nodes determine whether packets should be forwarded to the next-hop node. Simulation results show that, compared with traditional epidemic routing and probabilistic routing, the proposed scheme performs better in packet delivery ratio, average delay and average buffering time.
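A toy sketch of utility-based forwarding (the linear weighting of contact duration and meeting frequency is an illustrative assumption, not the paper's exact utility definition):

```python
def utility(contact_duration, meeting_freq, w=0.5):
    """History-based delivery utility from normalized contact duration and
    meeting frequency (the weighting scheme is an illustrative assumption)."""
    return w * contact_duration + (1 - w) * meeting_freq

def should_forward(my_history, peer_history, dest):
    """Hand the packet over only if the encountered node's utility toward
    the destination is higher than ours."""
    return utility(*peer_history[dest]) > utility(*my_history[dest])

mine = {"D": (0.2, 0.3)}           # (normalized duration, frequency) toward D
peer = {"D": (0.6, 0.5)}
print(should_forward(mine, peer, "D"))   # True: the peer meets D more reliably
```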